Soccer on Your Tabletop
We present a system that transforms a monocular video of a soccer game into a
moving 3D reconstruction, in which the players and field can be rendered
interactively with a 3D viewer or through an Augmented Reality device. At the
heart of our paper is an approach to estimate the depth map of each player,
using a CNN that is trained on 3D player data extracted from soccer video
games. We compare with state-of-the-art body pose and depth estimation
techniques, and show results on both synthetic ground-truth benchmarks and
real YouTube soccer footage.
Comment: CVPR'18. Project: http://grail.cs.washington.edu/projects/soccer
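The core geometric step implied by the abstract, once a per-player depth map has been predicted, is back-projecting that depth into 3D so the player can be placed on the reconstructed field. The following is a minimal numpy sketch of that unprojection only; the `unproject_depth` function, the intrinsics values, and the constant-depth "billboard" input are illustrative assumptions, not the paper's actual CNN or pipeline.

```python
import numpy as np

def unproject_depth(depth, K):
    """Back-project a per-player depth map into a 3D point cloud.

    depth: (H, W) array of metric depths (standing in for a CNN's output).
    K: (3, 3) camera intrinsics matrix.
    Returns an (H*W, 3) array of points in camera coordinates.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates [u, v, 1] for every pixel.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T   # viewing ray through each pixel
    return rays * depth.reshape(-1, 1)  # scale each ray by its depth

# Toy example: a flat player "billboard" 10 m from the camera,
# with a 64x64 crop and an assumed focal length of 500 px.
K = np.array([[500., 0., 32.],
              [0., 500., 32.],
              [0.,   0.,  1.]])
depth = np.full((64, 64), 10.0)
pts = unproject_depth(depth, K)
```

The principal point (32, 32) unprojects to a point on the optical axis at the predicted depth; off-center pixels fan out proportionally, which is what turns a flat depth image into a renderable 3D surface.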
PersonNeRF: Personalized Reconstruction from Photo Collections
We present PersonNeRF, a method that takes a collection of photos of a
subject (e.g. Roger Federer) captured across multiple years with arbitrary body
poses and appearances, and enables rendering the subject with arbitrary novel
combinations of viewpoint, body pose, and appearance. PersonNeRF builds a
customized neural volumetric 3D model of the subject that is able to render an
entire space spanned by camera viewpoint, body pose, and appearance. A central
challenge in this task is dealing with sparse observations; a given body pose
is likely only observed by a single viewpoint with a single appearance, and a
given appearance is only observed under a handful of different body poses. We
address this issue by recovering a canonical T-pose neural volumetric
representation of the subject that allows for changing appearance across
different observations, but uses a shared pose-dependent motion field across
all observations. We demonstrate that this approach, along with regularization
of the recovered volumetric geometry to encourage smoothness, is able to
recover a model that renders compelling images from novel combinations of
viewpoint, pose, and appearance from these challenging unstructured photo
collections, outperforming prior work for free-viewpoint human rendering.
Comment: Project Page: https://grail.cs.washington.edu/projects/personnerf
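The factorization the abstract describes can be illustrated with a toy: every observation is warped back into one canonical T-pose space by a pose-dependent motion field shared across all observations, where a single geometry is queried, while appearance varies per observation. The rigid warp, sphere geometry, and color lookup below are deliberately simplistic stand-ins for PersonNeRF's learned networks, chosen only to make the structure of the decomposition concrete.

```python
import numpy as np

def warp_to_canonical(x, pose_rotation, pose_translation):
    """Pose-dependent motion field (toy rigid version): map a point
    observed under some body pose back into canonical T-pose space."""
    return pose_rotation.T @ (x - pose_translation)

def canonical_density(x_canonical, radius=1.0):
    """Shared canonical geometry: density 1 inside a unit sphere, else 0.
    In the real method this is a neural volumetric representation."""
    return 1.0 if np.linalg.norm(x_canonical) <= radius else 0.0

def appearance_color(appearance_code):
    """Per-observation appearance: a lookup standing in for a learned
    appearance embedding; it changes color but never geometry."""
    palette = {0: (0.8, 0.1, 0.1), 1: (0.1, 0.1, 0.8)}
    return palette[appearance_code]

# One observed 3D point, under a pose that translates the body by 2 m
# along z; the warp maps it to a single canonical query point.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
x_obs = np.array([0.0, 0.0, 2.5])
x_canon = warp_to_canonical(x_obs, R, t)
sigma = canonical_density(x_canon)  # inside the canonical body
```

Because geometry lives only in canonical space, a pose seen from a single viewpoint and an appearance seen under few poses still constrain the same shared model, which is how the method copes with the sparse-observation problem the abstract highlights.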
Mates2Motion: Learning How Mechanical CAD Assemblies Work
We describe our work on inferring the degrees of freedom between mated parts
in mechanical assemblies using deep learning on CAD representations. We train
our model using a large dataset of real-world mechanical assemblies consisting
of CAD parts and mates joining them together. We present methods for
re-defining these mates to make them better reflect the motion of the assembly,
as well as narrowing down the possible axes of motion. We also conduct a user
study to create a motion-annotated test set with more reliable labels.
Comment: Contains 5 pages, 2 figures. Presented at the ICML 2022 Workshop on
Machine Learning in Computational Design
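The kind of output the abstract describes, a mate reduced to its degrees of freedom and a narrowed-down motion axis, can be sketched as follows. The `classify_mate` function, the DoF-to-label table, and the axis normalization are illustrative assumptions about what such labels look like, not the paper's actual model or dataset schema.

```python
import numpy as np

# Coarse motion labels keyed by (rotational DoF, translational DoF)
# about a single candidate axis.
MOTION_TYPES = {
    (0, 0): "fixed",        # parts rigidly joined, no relative motion
    (1, 0): "revolute",     # rotation about one axis (a hinge)
    (0, 1): "prismatic",    # translation along one axis (a slider)
    (1, 1): "cylindrical",  # rotation and translation on the same axis
}

def classify_mate(rot_dof, trans_dof, axis):
    """Map degrees of freedom plus a candidate axis to a motion label,
    returning the axis as a unit vector (toy stand-in for a learned
    predictor over CAD mate representations)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)  # normalize the candidate axis
    return MOTION_TYPES[(rot_dof, trans_dof)], axis

# A hinge-like mate rotating about the z axis.
label, axis = classify_mate(1, 0, [0, 0, 2])
```

A motion-annotated test set of the sort the user study produces would pair each mate with exactly this kind of label-plus-axis ground truth.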